14 research outputs found

    Is it the real deal? Perception of virtual characters versus humans: an affective cognitive neuroscience perspective

    Recent developments in neuroimaging research support the increased use of naturalistic stimulus material such as films, animations, or androids. These stimuli allow for a better understanding of how the brain processes information in complex situations while maintaining experimental control. While avatars and androids are well suited to studying human cognition, they should not be equated with human stimuli. For example, the Uncanny Valley hypothesis theorizes that artificial agents with high human-likeness may evoke feelings of eeriness in the human observer. Here we review whether, when, and how the perception of human-like avatars and androids differs from the perception of humans, and consider how this influences their utilization as stimulus material in social and affective neuroimaging studies. First, we discuss how the appearance of virtual characters affects perception. When stimuli are morphed across categories from non-human to human, it is the most ambiguous stimuli, rather than the most human-like ones, that show prolonged classification times and increased eeriness, whereas human-like to human stimuli show a positive linear relationship with familiarity. Second, we show that expressions of emotion in human-like avatars can be perceived similarly to human emotions, with corresponding behavioral, physiological, and neuronal activations, with the exception of physical dissimilarities. Subsequently, we consider if and when one perceives differences in action representation by artificial agents versus humans. Motor resonance and predictive coding models may account for empirical findings such as an interference effect on action for observed human-like, naturally moving characters. However, the extension of these models to explain more complex behavior, such as empathy, still needs to be investigated in more detail. Finally, we broaden our outlook to social interaction, where virtual reality stimuli can be utilized to imitate complex social situations.

    fMRI-based multivariate pattern analyses reveal imagery modality and imagery content specific representations in primary somatosensory, motor and auditory cortices

    Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses, we showed that the primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor, and visual cortices, the imagery-modality-discriminative patterns were similar to the perception-modality-discriminative patterns, suggesting that top-down modulations in these regions rely on neural representations similar to those engaged by bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices: both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions.
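    The abstract does not spell out the decoding pipeline, but an ROI-based multivariate pattern analysis of this kind typically trains a linear classifier on trial-wise voxel patterns and evaluates it with leave-one-run-out cross-validation. The sketch below illustrates that general approach with scikit-learn; the data shapes, labels, and classifier choice are hypothetical and not taken from the study.

        # Illustrative MVPA decoding sketch (hypothetical data, not the authors' pipeline):
        # classify imagery modality (touch vs. sound) from single-trial ROI voxel patterns
        # with a linear SVM and leave-one-run-out cross-validation.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        n_runs, trials_per_run, n_voxels = 8, 20, 300                  # hypothetical design
        X = rng.standard_normal((n_runs * trials_per_run, n_voxels))   # trials x voxels (ROI patterns)
        y = rng.integers(0, 2, size=n_runs * trials_per_run)           # 0 = touch imagery, 1 = sound imagery
        runs = np.repeat(np.arange(n_runs), trials_per_run)            # run labels used as CV groups

        # Standardize each voxel, fit a linear classifier, and cross-validate across runs
        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
        scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=runs)
        print(f"Mean cross-validated decoding accuracy: {scores.mean():.2f}")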

    Threat Detection in Nearby Space Mobilizes Human Ventral Premotor Cortex, Intraparietal Sulcus, and Amygdala

    In the monkey brain, the precentral gyrus and the ventral intraparietal area are two interconnected regions that form a system for detecting and responding to events in nearby “peripersonal” space (PPS), with threat detection as one of its major functions. Behavioral studies point toward a similar defensive function of PPS in humans. Here, our aim was to find support for this hypothesis by investigating whether homologous regions in the human brain respond more strongly to approaching threatening stimuli. During fMRI scanning, naturalistic social stimuli were presented in a 3D virtual environment. Our results showed that the ventral premotor cortex and the intraparietal sulcus responded more strongly to threatening stimuli entering PPS. Moreover, we found evidence for the involvement of the amygdala and the anterior insula in processing threats. We propose that the defensive function of PPS may be supported by a subcortical circuit that sends information about the relevance of the stimulus to the premotor cortex and intraparietal sulcus, where action preparation is facilitated when necessary.

    Amygdala responds to direct gaze in real but not in computer-generated faces

    Computer-generated (CG) faces are an important visual interface for human-computer interaction in social contexts. Here we investigated whether the human brain processes emotion and gaze similarly in real and carefully matched CG faces. Real faces evoked greater responses in the fusiform face area than CG faces, particularly for fearful expressions. Emotional (angry and fearful) facial expressions evoked similar activations in the amygdala in real and CG faces. Direct as compared with averted gaze elicited greater fMRI responses in the amygdala regardless of facial expression, but only for real and not for CG faces. We observed an interaction effect between gaze and emotion (i.e., the shared signal effect) in the right posterior temporal sulcus and other regions, but not in the amygdala, and we found no evidence for different shared signal effects in real and CG faces. Taken together, the present findings highlight similarities (emotional processing in the amygdala) and differences (overall processing in the fusiform face area, gaze processing in the amygdala) in the neural processing of real and CG faces.

    Multimodal imaging: an evaluation of univariate and multivariate methods for simultaneous EEG/fMRI

    The combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has been proposed as a tool to study brain dynamics with both high temporal and high spatial resolution. Multimodal imaging techniques rely on the assumption of a common neuronal source for the different recorded signals. In order to maximally exploit the combination of these techniques, one needs to understand the coupling (i.e., the relation) between EEG and fMRI blood oxygen level-dependent (BOLD) signals. Recently, simultaneous EEG-fMRI measurements have been used to investigate the relation between the two signals. Previous attempts at analyzing simultaneous EEG-fMRI data reported significant correlations between regional BOLD activations and modulations of both event-related potentials (ERPs) and oscillatory EEG power, mostly in the alpha band but also in other frequency bands. Beyond the correlation of the two measured brain signals, the relevant issue we address here is the ability to predict the signal in one modality using information from the other. Using multivariate machine-learning-based regression, we show how EEG power oscillations can be predicted from simultaneously acquired fMRI data during an eyes-open/eyes-closed task, using either the original channels or the underlying cortically distributed sources as the relevant EEG signal for the analysis of multimodal data.
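    As an illustration of the multivariate, machine-learning-based regression described above, the sketch below predicts an EEG band-power time course from fMRI voxel time series with cross-validated ridge regression. The data, the choice of ridge regression, and the evaluation metric are assumptions for illustration only; the study's actual model and preprocessing are not reproduced here.

        # Illustrative regression sketch (hypothetical data, not the study's actual model):
        # predict an EEG alpha-power time course from fMRI voxel time series with
        # cross-validated ridge regression. A real analysis would also account for the
        # hemodynamic delay between neural activity and the BOLD signal.
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import KFold, cross_val_predict

        rng = np.random.default_rng(0)
        n_volumes, n_voxels = 400, 500
        fmri = rng.standard_normal((n_volumes, n_voxels))              # one row of voxel values per TR
        alpha_power = fmri[:, :10].mean(axis=1) + 0.5 * rng.standard_normal(n_volumes)  # toy EEG target

        # Multivariate ridge regression, with the penalty chosen by internal cross-validation
        model = RidgeCV(alphas=np.logspace(-2, 4, 13))
        predicted = cross_val_predict(model, fmri, alpha_power, cv=KFold(n_splits=5))
        r, _ = pearsonr(predicted, alpha_power)
        print(f"Cross-validated prediction accuracy (Pearson r): {r:.2f}")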